Molecular dynamics (MD) simulation is a workhorse across many scientific fields, but it is limited by its high computational cost. Learning-based force fields have made significant progress in accelerating ab initio MD simulations, yet they remain too slow for many real-world applications that require long-time MD simulation. In this paper, we adopt a different machine learning approach: we coarse-grain a physical system using graph clustering and model the system's evolution with a graph neural network using very large time-integration steps. A novel score-based GNN refinement module addresses the long-standing challenge of instability in long-time simulation. Although trained only on short MD trajectory data, our learned simulator generalizes to unseen, novel systems and to time horizons much longer than the training trajectories. Properties requiring long dynamics on the 10-100 ns scale can be accurately recovered at several orders of magnitude higher speed than classical force fields. We demonstrate the effectiveness of our method on two realistic, complex systems: (1) a single-chain coarse-grained polymer in implicit solvent; (2) a multi-component lithium-ion polymer electrolyte system.
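The coarse-graining step described above can be illustrated with a minimal sketch. The paper derives the grouping from graph clustering; here a fixed, hand-made cluster assignment stands in for that step, and beads are simply cluster centroids:

```python
import numpy as np

def coarse_grain(positions, cluster_ids):
    """Map fine-grained particle positions to coarse-grained bead
    positions by averaging positions over each cluster."""
    cluster_ids = np.asarray(cluster_ids)
    return np.array([positions[cluster_ids == c].mean(axis=0)
                     for c in np.unique(cluster_ids)])

# Four particles grouped into two beads of two particles each.
pos = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0], [12.0, 0.0]])
beads = coarse_grain(pos, [0, 0, 1, 1])
# beads -> [[1., 0.], [11., 0.]]
```

A learned simulator then advances these bead positions with large time steps instead of integrating every particle.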
CBCT in image-guided radiotherapy provides critical anatomical information for patient setup and plan evaluation. Longitudinal CBCT image registration can quantify inter-fraction anatomical changes. The purpose of this study is to propose an unsupervised deep-learning-based CBCT-to-CBCT deformable image registration method. The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial-transformation-based network (STN). The STN consists of a global generative adversarial network (GlobalGAN) and a local GAN (LocalGAN), which predict coarse- and fine-scale motion, respectively. The networks were trained by minimizing an image similarity loss and a deformable vector field (DVF) regularization loss, without supervision from ground-truth DVFs. In the inference stage, patches of local DVFs were predicted by the trained LocalGAN and fused to form a whole-image DVF. This local whole-image DVF was then combined with the GlobalGAN-generated DVF to obtain the final DVF. In experiments, the method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients, and 105 fractional CBCTs from a held-out test cohort of 21 different abdominal cancer patients. Qualitatively, the registration results show good alignment between the deformed CBCT images and the target CBCT images. Quantitatively, the average target registration error (TRE), computed on fiducial markers and manually identified landmarks, was 1.91 +- 1.11 mm. The mean absolute error (MAE) and normalized cross-correlation (NCC) between the deformed CBCT and the target CBCT were 33.42 +- 7.48 HU and 0.94 +- 0.04, respectively. This promising registration method could provide fast and accurate longitudinal CBCT alignment to facilitate inter-fraction anatomical change analysis and prediction.
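The last step of such a pipeline applies the predicted DVF to warp one image onto the other. A minimal sketch of that warping step, under simplifying assumptions (2D instead of 3D, nearest-neighbor sampling instead of the interpolation a real pipeline would use, and a hand-made DVF):

```python
import numpy as np

def warp_nearest(image, dvf):
    """Warp a 2D image with a displacement vector field (DVF).
    dvf[y, x] = (dy, dx) points from each target voxel to its source
    location; nearest-neighbor sampling keeps the sketch short."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + dvf[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dvf[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.zeros((4, 4)); img[1, 1] = 100.0   # single bright voxel
dvf = np.zeros((4, 4, 2)); dvf[..., 1] = -1.0  # pull content one voxel right
warped = warp_nearest(img, dvf)
# the bright voxel now appears at column 2
```

The image similarity loss used in training compares `warped` against the target CBCT, so no ground-truth DVF is ever needed.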
Synthetic health data have the potential to mitigate privacy concerns when sharing data to support biomedical research and the development of innovative healthcare applications. Modern approaches for generating such data, based on machine learning and particularly on generative adversarial network (GAN) methods, continue to evolve and demonstrate remarkable potential. However, there is a lack of a systematic assessment framework for benchmarking methods and determining which are most appropriate. In this work, we introduce a generalizable benchmarking framework to assess key characteristics of synthetic health data with respect to utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods on electronic health records (EHR) data from two large academic medical centers. The results illustrate that there is a utility-privacy tradeoff in sharing synthetic EHR data. The results further indicate that no method is unequivocally the best on all criteria in every use case, which makes it evident why synthetic data generation methods need to be assessed in context.
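One common utility metric such a benchmarking framework can include is train-on-synthetic, test-on-real (TSTR). A minimal sketch, where the nearest-centroid classifier and the toy two-cluster data are illustrative assumptions rather than the paper's actual setup:

```python
import numpy as np

def tstr_accuracy(syn_X, syn_y, real_X, real_y):
    """Train-on-Synthetic, Test-on-Real: fit a nearest-centroid
    classifier on synthetic records and score it on real ones. High
    accuracy means the synthetic data preserved the real class structure."""
    labels = np.unique(syn_y)
    centroids = np.stack([syn_X[syn_y == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(real_X[:, None, :] - centroids[None, :, :], axis=2)
    preds = labels[dists.argmin(axis=1)]
    return (preds == real_y).mean()

rng = np.random.default_rng(0)
# Two well-separated toy "diagnosis" clusters; synthetic mimics real.
real_X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
real_y = np.array([0] * 50 + [1] * 50)
syn_X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
acc = tstr_accuracy(syn_X, real_y, real_X, real_y)
```

A privacy metric (e.g., nearest-neighbor distance to real records) would then be reported alongside `acc` to expose the utility-privacy tradeoff.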
Existing datasets for training deep learning models for narrowband radio frequency (RF) signal classification lack enough diversity in signal types and channel impairments to adequately assess model performance in the real world. We introduce the Sig53 dataset, consisting of 5 million synthetically generated samples from 53 different signal classes and expertly chosen impairments. We also introduce TorchSig, a signal processing machine learning toolkit that can be used to generate this dataset. TorchSig incorporates data-handling principles common to the vision domain, and it is meant to serve as an open-source foundation for future signal processing machine learning research. Initial experiments using the Sig53 dataset were conducted with state-of-the-art (SOTA) convolutional neural networks (ConvNets) and Transformers. These experiments reveal that Transformers outperform ConvNets without the need for additional regularization or a ConvNet teacher, which is contrary to results in the vision domain. Additional experiments demonstrate that TorchSig's domain-specific data augmentations facilitate model training, ultimately benefiting model performance. Finally, TorchSig supports on-the-fly synthetic data creation at training time, enabling massive-scale training sessions with virtually unlimited datasets.
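The kind of channel impairment baked into such a dataset can be illustrated with a small sketch. This is not the TorchSig API, just a generic example of impairing a complex baseband signal with additive white Gaussian noise at a target SNR:

```python
import numpy as np

def add_awgn(signal, snr_db, rng):
    """Impair a complex baseband signal with additive white Gaussian
    noise at a target signal-to-noise ratio (in dB)."""
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(len(signal))
                                        + 1j * rng.standard_normal(len(signal)))
    return signal + noise

rng = np.random.default_rng(1)
# Unit-power QPSK symbols as a stand-in for one of the 53 signal classes.
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 100_000)))
noisy = add_awgn(symbols, snr_db=10.0, rng=rng)
measured_snr_db = 10 * np.log10(1.0 / np.mean(np.abs(noisy - symbols) ** 2))
# measured_snr_db is close to the 10 dB target
```

Because samples like these are cheap to synthesize, they can also be generated on the fly during training rather than stored.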
Recently developed methods for video analysis, especially models for pose estimation and behavior classification, are transforming behavioral quantification to be more precise, scalable, and reproducible in fields such as neuroscience and ethology. These tools overcome long-standing limitations of manual scoring of video frames and traditional "center of mass" tracking algorithms to enable video analysis at scale. The expansion of open-source tools for video acquisition and analysis has led to new experimental approaches to understand behavior. Here, we review currently available open-source tools for video analysis and discuss how to set up these methods for labs new to video recording. We also discuss best practices for developing and using video analysis methods, including community-wide standards and critical needs for the open sharing of datasets and code, more widespread comparisons of video analysis methods, and better documentation for these methods especially for new users. We encourage broader adoption and continued development of these tools, which have tremendous potential for accelerating scientific progress in understanding the brain and behavior.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it: (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated but instead only relies on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experiment results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
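The two overfitting conditions can be sketched in a deliberately simplified form, treating inferred invariants as finite sets of strings (the real system infers likely invariants over program states; the invariants below are hypothetical):

```python
def is_overfitting(patch_invariants, correct_spec, buggy_only_invariants):
    """Simplified INVALIDATOR-style check: a patch overfits if it
    (1) violates an invariant of the correct specification, or
    (2) preserves an invariant seen only in the buggy program's
    erroneous behavior."""
    violates_spec = not correct_spec <= patch_invariants
    keeps_bug = bool(patch_invariants & buggy_only_invariants)
    return violates_spec or keeps_bug

correct_spec = {"x >= 0", "ret == x + 1"}
buggy_only = {"ret == x"}            # invariant characterizing the bug
good_patch = {"x >= 0", "ret == x + 1"}
bad_patch = {"x >= 0", "ret == x"}   # still returns the wrong value
# is_overfitting(good_patch, ...) -> False
# is_overfitting(bad_patch, ...)  -> True
```

Only when neither condition fires does the syntactic classifier take over.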
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task ``features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, and thereby their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
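The similarity criterion described above can be made concrete in a tiny sketch. The feature names and trajectories below are hypothetical, chosen only to illustrate the rule "similar iff every feature that matters agrees, regardless of low-level differences":

```python
def similar(traj_a, traj_b, features_that_matter):
    """Two behaviors count as similar iff they agree on every feature
    the person cares about; low-level differences are ignored."""
    return all(traj_a[f] == traj_b[f] for f in features_that_matter)

# Hypothetical trajectories described by task features plus low-level detail.
a = {"dist_to_human": "far", "cup_upright": True, "joint_path": "path_1"}
b = {"dist_to_human": "far", "cup_upright": True, "joint_path": "path_2"}
c = {"dist_to_human": "close", "cup_upright": True, "joint_path": "path_1"}

matter = ["dist_to_human", "cup_upright"]
# a and b are similar despite different joint paths; a and c are not
```

Answers to such queries supervise the representation directly, rather than the full reward.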
The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different challenges in representation learning than image data, where traditional machine learning is often superior to deep tabular data learning. In this paper, we address the challenges of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm by replacing t-distributions with multivariate Gaussian clusters. Unlike current methods, the proposed approach independently defines the Gaussian embedding and the target cluster distribution to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on the embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster embedding methods on seven tabular data sets. This paper presents one of the first algorithms to jointly learn embedding and clustering to improve multivariate tabular data representation in downstream clustering.
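The Gaussian cluster distribution that replaces the t-distribution can be sketched as a soft-assignment computation. This is a simplification assuming isotropic, equal-variance clusters; the algorithm itself works with multivariate Gaussian clusters in the learned latent space:

```python
import numpy as np

def gaussian_responsibilities(X, means, var=1.0):
    """Soft cluster assignments under isotropic Gaussian clusters:
    each row of the result sums to 1 and gives the probability of a
    point belonging to each cluster."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    logits = -d2 / (2 * var)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    q = np.exp(logits)
    return q / q.sum(axis=1, keepdims=True)

X = np.array([[0.1, 0.0], [4.9, 5.1]])            # two embedded points
means = np.array([[0.0, 0.0], [5.0, 5.0]])        # two cluster centers
q = gaussian_responsibilities(X, means)
# each point is assigned overwhelmingly to its nearest cluster
```

During training, distributions like `q` are pulled toward a separately defined target cluster distribution, which is what lets the method accommodate any clustering algorithm.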
In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique for corkwing wrasse, Symphodus melops, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings might range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88, on our dataset.
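The contrastive objective behind such a model can be sketched with a single positive pair against negatives. This is an NT-Xent-style loss on toy 2D embeddings, not the paper's exact training setup:

```python
import numpy as np

def nt_xent_pair_loss(z_i, z_j, negatives, temperature=0.1):
    """Contrastive loss for one positive pair of embeddings against a
    set of negatives: pull the pair together, push negatives away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(z_i, z_j) / temperature)
    neg = sum(np.exp(cos(z_i, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
same_fish = np.array([0.9, 0.1])    # another photo of the same individual
other_fish = np.array([0.0, 1.0])   # a different individual
loss_matched = nt_xent_pair_loss(anchor, same_fish, [other_fish])
loss_swapped = nt_xent_pair_loss(anchor, other_fish, [same_fish])
# loss is low when the positive pair really is the same individual
```

At inference time, re-identification reduces to nearest-neighbor search over such embeddings, which is what the reported k-shot accuracies measure.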
An unbiased scene graph generation (SGG) algorithm referred to as Skew Class-balanced Re-weighting (SCR) is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, showing drastically dropping recall scores, i.e., losing the majority predicate performance. The trade-off between majority and minority predicate performance on the limited SGG datasets has not yet been correctly analyzed. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is considered for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then assigns larger weights to the biased predicates, achieving a better trade-off between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 \& V6 show the performance and generality of SCR with traditional SGG models.
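The general shape of such re-weighting can be sketched from label frequencies alone. Note this uses the standard "effective number of samples" class-balanced weighting as a stand-in; SCR's actual skew-based estimation differs:

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Loss weights from per-class label counts: rare (minority)
    predicates receive larger weights than frequent ones, via the
    effective-number-of-samples formulation."""
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective_num
    return w * len(counts) / w.sum()   # normalize to mean weight 1

# e.g. predicate counts for ("on", "holding", "riding")
w = class_balanced_weights([10_000, 500, 20])
# the rarest predicate gets the largest loss weight
```

Multiplying each predicate's cross-entropy term by its weight shifts the trade-off toward minority predicates without discarding majority-class supervision.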